    Testing Method for Multi-UAV Conflict Resolution Using Agent-Based Simulation and Multi-Objective Search

    A new approach to testing multi-UAV conflict resolution algorithms is presented. The problem is formulated as a multi-objective search with two objectives: finding air traffic encounters that 1) are able to reveal faults in conflict resolution algorithms and 2) are likely to happen in the real world. The method uses agent-based simulation and multi-objective search to automatically find encounters satisfying these objectives. It describes pairwise encounters in three-dimensional space using a parameterized geometry representation, which allows encounters involving multiple UAVs to be generated by combining several pairwise encounters. The consequences of the encounters, given the conflict resolution algorithm, are explored using a fast-time agent-based simulator. A genetic algorithm is used to find encounters meeting the two objectives. The method is applied to test ORCA-3D, a widely cited open-source multi-UAV conflict resolution algorithm, and its performance is compared with a plausible random testing approach. The results show that the method can find the required encounters more efficiently than random search. The identified safety incidents then serve as starting points for understanding the limitations of the conflict resolution algorithm.
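
    The sketch below illustrates the kind of search the paper describes. It is a simplified stand-in: the encounter parameterisation (bearing, range, vertical offset), the toy resolution manoeuvre, the plausibility score, and the scalarised fitness are all assumptions made here for illustration; the paper itself uses a richer parameterized 3-D geometry, a fast-time agent-based simulator, and a genuinely multi-objective (Pareto-based) genetic algorithm.

```python
import math
import random

random.seed(0)

def random_encounter():
    """A pairwise encounter as a small parameter vector (hypothetical
    parameterisation): approach bearing (rad), initial horizontal
    range (m), vertical offset (m)."""
    return [random.uniform(0.0, 2.0 * math.pi),
            random.uniform(200.0, 2000.0),
            random.uniform(-100.0, 100.0)]

def simulate_min_separation(enc, speed=15.0, dt=1.0, steps=120):
    """Fast-time simulation placeholder: two UAVs converge while a toy
    'resolution algorithm' commands a climb when they get close.
    Returns the minimum 3-D separation reached (m)."""
    bearing, rng, dz = enc
    ox, oy, oz = 0.0, 0.0, 0.0                       # ownship, heading north
    ix, iy, iz = rng * math.sin(bearing), rng * math.cos(bearing), dz
    min_sep = float("inf")
    for _ in range(steps):
        sep = math.dist((ox, oy, oz), (ix, iy, iz))
        min_sep = min(min_sep, sep)
        if sep < 150.0:                              # toy resolution: climb
            oz += 2.0 * dt
        oy += speed * dt
        ix -= speed * math.sin(bearing) * dt         # intruder flies inbound
        iy -= speed * math.cos(bearing) * dt
    return min_sep

def fitness(enc):
    """Scalarised stand-in for the paper's two objectives: prefer low
    achieved separation (fault-revealing) and high plausibility."""
    sep = simulate_min_separation(enc)
    plausibility = math.exp(-abs(enc[0] - math.pi)) * math.exp(-enc[1] / 1000.0)
    return sep - 500.0 * plausibility                # lower is better

def evolve(pop_size=40, generations=30):
    pop = [random_encounter() for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness)[: pop_size // 2]  # truncation
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [random.choice(genes) for genes in zip(a, b)]  # crossover
            i = random.randrange(3)                  # point mutation
            child[i] += random.gauss(0.0, 0.1) * (child[i] or 1.0)
            children.append(child)
        pop = parents + children
    return sorted(pop, key=fitness)[:5]

for enc in evolve():
    print([round(v, 1) for v in enc],
          "min sep:", round(simulate_min_separation(enc), 1), "m")
```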

    On the Validation of a UAV Collision Avoidance System Developed by Model-Based Optimization: Challenges and a Tentative Partial Solution

    The development of the new generation of airborne collision avoidance systems, ACAS X, adopts a model-based optimization approach, in which the collision avoidance logic is automatically generated from a probabilistic model and a set of preferences. This has the potential to deliver safety benefits and shorten the development cycle, but it poses new challenges for safety assurance. In this paper, we introduce the new development process and explain its key ideas using a simple collision avoidance example. Based on this explanation, we analyze the challenges it poses to safety assurance, with a particular focus on system validation. We then propose a Genetic-Algorithm-based approach that can efficiently search for undesired situations to help the development and validation of the system. We introduce an open-source tool we have developed to support this approach and demonstrate it by searching for challenging situations for ACAS XU.
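
    A deliberately tiny stand-in can make the "logic by optimisation" idea concrete. In the sketch below, a probabilistic encounter model (an MDP over relative-altitude bins and time to closest approach) plus a set of preferences (costs) is solved by backward dynamic programming, and the resulting action table is the generated collision avoidance logic. The state space, noise model, and costs are invented here for illustration; they are not taken from ACAS X.

```python
import numpy as np

ALTS = np.arange(-5, 6)        # relative altitude bins (own minus intruder)
T = 20                         # steps to closest point of approach (CPA)
ACTIONS = {-1: "descend", 0: "level", +1: "climb"}

def transitions(alt_idx, action):
    """Own action shifts relative altitude; intruder drift adds noise."""
    moved = int(np.clip(alt_idx + action, 0, len(ALTS) - 1))
    return [(int(np.clip(moved - drift, 0, len(ALTS) - 1)), p)
            for drift, p in ((-1, 0.2), (0, 0.6), (+1, 0.2))]

# Preferences: a large cost for being within one bin of the intruder at CPA,
# a small cost for every manoeuvre (to discourage nuisance advisories).
V = np.zeros((T + 1, len(ALTS)))
V[0] = np.where(np.abs(ALTS) <= 1, -100.0, 0.0)     # cost at CPA itself
policy = np.zeros((T + 1, len(ALTS)), dtype=int)

for t in range(1, T + 1):                           # backward induction
    for s in range(len(ALTS)):
        q = {a: -0.1 * abs(a) + sum(p * V[t - 1][s2]
                                    for s2, p in transitions(s, a))
             for a in ACTIONS}
        policy[t][s] = max(q, key=q.get)
        V[t][s] = q[policy[t][s]]

# The optimised lookup table maps (time to CPA, relative altitude) to an
# advisory, e.g. five steps from CPA with the intruder one bin below:
print(ACTIONS[int(policy[5][np.where(ALTS == 1)[0][0]])])
```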

    Enhancing Covid-19 Decision-Making by Creating an Assurance Case for Simulation Models

    Simulation models have been informing the COVID-19 policy-making process. These models therefore have significant influence on the risk of societal harms. But how clearly are the underlying modelling assumptions and limitations communicated, so that decision-makers can readily understand them? When making claims about risk in safety-critical systems, it is common practice to produce an assurance case: a structured argument, supported by evidence, whose aim is to assess how confident we should be in our risk-based decisions. We argue that any COVID-19 simulation model used to guide critical policy decisions would benefit from being supported by such a case, to explain how, and to what extent, the evidence from the simulation can be relied on to substantiate policy conclusions. This would enable a critical review of the implicit assumptions and inherent uncertainty in the modelling, and would give the overall decision-making process greater transparency and accountability.

    Fixing the cracks in the crystal ball: a maturity model for quantitative risk assessment

    Quantitative risk assessment (QRA) is widely practiced in system safety, but there is insufficient evidence that QRA in general is fit for purpose. Defenders of QRA draw a distinction between poor or misused QRA and correct, appropriately used QRA, but this distinction is only useful if we have robust ways to identify the flaws in an individual QRA. In this paper we present a comprehensive maturity model for QRA which covers all the potential flaws discussed in the risk assessment literature and in a collection of risk assessment peer reviews. We provide initial validation of the completeness and realism of the model. Our risk assessment maturity model provides a way to prioritise both process development within an organisation and empirical research within the QRA community.

    Introducing Autonomous Systems into Operation: How the SMS has to Change

    When an autonomous system is deployed into a specific environment, new safety risks may be introduced. These could include risks due to staff interacting with the new system in unsafe ways (e.g. getting too close), risks to infrastructure (e.g. collisions with maintenance equipment), and risks to the environment (e.g. due to increased traffic flows). Hence changes must be made to the local Safety Management System (SMS) governing how the system is deployed, operated, maintained and disposed of within its operating context. This includes how operators, maintainers, emergency services and accident investigators have to work to new practices and develop new skills. They may also require new approaches, tools and techniques to do their jobs. Many autonomous systems (for example aerial drones or self-driving shuttles) may also come with a generic product-based safety justification, comprising a safety case and operational information (e.g. manuals), that may need tailoring or adapting to each deployment environment. This adaptation may be done, in part, via the SMS. This paper focusses on these deployment and adaptation issues, highlighting changes to working processes and practices.

    Towards a Framework for Safety Assurance of Autonomous Systems

    Autonomous systems have the potential to provide great benefit to society. However, they also pose problems for safety assurance, whether fully autonomous or remotely operated (semi-autonomous). This paper discusses the challenges of safety assurance of autonomous systems and proposes a novel framework for safety assurance that, inter alia, uses machine learning to provide evidence for a system safety case, thus enabling the safety case to be updated dynamically as system behaviour evolves.

    Safe, Ethical and Sustainable: Framing the Argument

    The authors have previously articulated the need to think beyond safety to encompass ethical and environmental (sustainability) concerns, and to address these concerns through the medium of argumentation. However, the scope of concerns is very large, and there are further challenges such as the need to make trade-offs between incommensurable concerns. The paper outlines an approach to these challenges through suitably framing the argument, and illustrates the approach by considering alternative concept designs for an autonomous mobility service.

    Enhancing COVID-19 decision making by creating an assurance case for epidemiological models

    When the UK government was first confronted with the very real threat of a COVID-19 pandemic, policy-makers turned quickly, and initially almost exclusively, to scientific data provided by epidemiological models. These models have had a direct and significant influence on policies and decisions, such as social distancing and the closure of schools, which aim to reduce the risk posed by COVID-19 to public health. The models suggested that, depending on the strategies chosen, the number of deaths could vary by hundreds of thousands. From a safety engineering perspective, it is clear that the data generated by epidemiological models are safety critical, and that the models themselves should therefore be regarded as safety-critical systems.
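
    To make concrete what kind of computation sits behind such policy-facing numbers, the sketch below runs a minimal SIR model under two transmission scenarios and converts the resulting attack rates into illustrative death counts. The transmission rates, population figure, and infection fatality ratio are assumptions invented for this example; they are not those of any model used by UK advisers.

```python
def sir_final(beta, gamma=0.1, i0=1e-4, days=3000):
    """Discrete-time SIR: fractions susceptible/infected/recovered after
    running the epidemic to completion (one-day steps)."""
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(days):
        new_inf = beta * s * i            # new infections today
        new_rec = gamma * i               # recoveries today
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

pop, ifr = 67e6, 0.009                    # assumed UK-scale population and IFR
for label, beta in [("unmitigated (R0 = 3.0)", 0.30),
                    ("strong distancing (R0 = 1.3)", 0.13)]:
    _, _, recovered = sir_final(beta)
    print(f"{label}: ~{recovered * pop * ifr:,.0f} deaths")
```

    Even this toy model reproduces the abstract's point: the two scenarios differ by several hundred thousand projected deaths, which is precisely why the outputs deserve safety-critical treatment.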

    The Role of Explainability in Assuring Safety of Machine Learning in Healthcare

    Established approaches to assuring safety-critical systems and software are difficult to apply to systems employing ML, where there is no clear, pre-defined specification against which to assess validity. This problem is exacerbated by the opaque nature of ML, where the learnt model is not amenable to human scrutiny. XAI methods have been proposed to tackle this issue by producing human-interpretable representations of ML models, which can help users to gain confidence and build trust in the ML system. However, little work explicitly investigates the role of explainability for safety assurance in the context of ML development. This paper identifies ways in which XAI methods can contribute to the safety assurance of ML-based systems. It then uses a concrete ML-based clinical decision support system, concerning weaning of patients from mechanical ventilation, to demonstrate how XAI methods can be employed to produce evidence to support safety assurance. The results are also represented in a safety argument to show where, and in what way, XAI methods can contribute to a safety case. Overall, we conclude that XAI methods have a valuable role in the safety assurance of ML-based systems in healthcare, but that they are not sufficient in themselves to assure safety.
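
    As an illustration of how an XAI method can yield safety evidence, the sketch below trains a classifier on synthetic "weaning-readiness" data and uses permutation importance to check that a clinically irrelevant feature does not drive predictions. The feature names, data, and acceptance threshold are assumptions for illustration; the paper applies such analyses to a real clinical decision support model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features; "bed_number" is clinically irrelevant by design.
features = ["spontaneous_breathing", "oxygen_saturation",
            "sedation_level", "bed_number"]
n = 2000
X = rng.normal(size=(n, len(features)))
# Ground truth depends only on the clinically meaningful features.
y = (X[:, 0] + 0.5 * X[:, 1] - 0.5 * X[:, 2]
     + rng.normal(0, 0.5, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation importance on held-out data: how much accuracy drops when
# each feature's values are shuffled.
imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
ranked = sorted(zip(features, imp.importances_mean), key=lambda kv: -kv[1])
for name, score in ranked:
    print(f"{name:22s} {score:.3f}")

# Evidence check for the safety argument: a clinically irrelevant feature
# should carry negligible influence on the model's decisions.
assert dict(ranked)["bed_number"] < 0.01, "spurious feature drives predictions"
```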